
feat(vertex-anthropic): add support for custom Vertex AI Anthropic model#1651

Open
Abmarne wants to merge 4 commits into eyaltoledano:next from Abmarne:main

Conversation

@Abmarne Abmarne commented Mar 11, 2026

Fixes: #1648

This document describes the Vertex Anthropic provider implementation for Task Master, which enables using Claude models through Google Vertex AI.

Overview

The Vertex Anthropic provider (vertex-anthropic) allows users to access Claude models (e.g., Claude Sonnet, Claude Opus) through Google Vertex AI's publishers/anthropic endpoint, rather than the standard publishers/google endpoint used for Gemini models.

Background

Google Vertex AI hosts models from multiple providers:

  • Google models (Gemini) → publishers/google/models/{model}:generateContent
  • Anthropic models (Claude) → publishers/anthropic/models/{model}:generateContent

The standard Vertex provider only uses the Google endpoint, which fails when requesting Claude models. This provider uses the @ai-sdk/google-vertex/anthropic subpath export to route requests correctly.
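To make the routing difference concrete, here is a small illustrative helper (hypothetical, not code from this PR or the SDK) that builds the REST endpoint each provider ultimately targets:

```javascript
// Hypothetical helper (illustration only): the two providers differ only in
// the publisher segment of the Vertex AI generateContent endpoint.
function vertexEndpoint({ project, location, publisher, model }) {
	return (
		`https://${location}-aiplatform.googleapis.com/v1/projects/${project}` +
		`/locations/${location}/publishers/${publisher}/models/${model}:generateContent`
	);
}

// vertex (Gemini) → publishers/google/...
console.log(
	vertexEndpoint({ project: 'my-proj', location: 'us-central1', publisher: 'google', model: 'gemini-pro' })
);
// vertex-anthropic (Claude) → publishers/anthropic/...
console.log(
	vertexEndpoint({ project: 'my-proj', location: 'us-central1', publisher: 'anthropic', model: 'claude-sonnet-4-6' })
);
```

Requesting a Claude model through the `publishers/google` path is what produces the "not found" error described under Troubleshooting below.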

Implementation

Files Added/Modified

| File | Change | Description |
|---|---|---|
| `src/ai-providers/google-vertex-anthropic.js` | New | Provider implementation using `createVertexAnthropic` |
| `src/ai-providers/index.js` | Modified | Export `VertexAnthropicProvider` |
| `scripts/modules/ai-services-unified.js` | Modified | Register provider and configuration handling |
| `packages/tm-core/src/common/constants/providers.ts` | Modified | Add `VERTEX_ANTHROPIC` to `CUSTOM_PROVIDERS` |
| `apps/cli/src/commands/models/types.ts` | Modified | Add `VERTEX_ANTHROPIC` to `CUSTOM_PROVIDER_IDS` |
| `apps/cli/src/commands/models/custom-providers.ts` | Modified | Add custom provider configuration |
| `scripts/modules/task-manager/models.js` | Modified | Add model validation handling |
| `tests/unit/scripts/modules/task-manager/models-baseurl.test.js` | Modified | Update mock constants |

Provider Class

```javascript
// src/ai-providers/google-vertex-anthropic.js
import { createVertexAnthropic } from '@ai-sdk/google-vertex/anthropic';
import { BaseAIProvider } from './base-provider.js';

export class VertexAnthropicProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Google Vertex AI (Anthropic)';
	}

	getClient(params) {
		const { projectId, location } = params;
		// Uses createVertexAnthropic from @ai-sdk/google-vertex/anthropic
		return createVertexAnthropic({
			project: projectId,
			location,
			// ...auth options
		});
	}
}
```
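The elided auth options might be assembled roughly as below. This is a standalone sketch; the option names (`googleAuthOptions`, `apiKey`, `keyFilename`) are assumptions about the SDK surface, not the PR's exact code:

```javascript
// Hypothetical sketch: derive auth options from the same inputs the standard
// Vertex provider uses. Option names here are assumptions, not the PR's code.
function buildAuthOptions({ apiKey, credentialsPath }) {
	if (apiKey) {
		return { googleAuthOptions: { apiKey } };
	}
	if (credentialsPath) {
		// Path to a service-account JSON key file
		return { googleAuthOptions: { keyFilename: credentialsPath } };
	}
	throw new Error(
		'Set GOOGLE_API_KEY or GOOGLE_APPLICATION_CREDENTIALS for vertex-anthropic'
	);
}

console.log(buildAuthOptions({ apiKey: 'test-key' }));
```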

Configuration

Environment Variables

The same environment variables are used as the standard Vertex provider:

| Variable | Required | Description |
|---|---|---|
| `GOOGLE_API_KEY` | Optional | Google API key for authentication |
| `GOOGLE_APPLICATION_CREDENTIALS` | Optional | Path to service account JSON file |
| `VERTEX_PROJECT_ID` | Required | Google Cloud project ID |
| `VERTEX_LOCATION` | Required | Google Cloud region (e.g., `us-central1`) |

At least one of GOOGLE_API_KEY or GOOGLE_APPLICATION_CREDENTIALS must be provided.
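For example, in a shell profile (all values below are placeholders):

```shell
# Placeholders: substitute your own project, region, and key path
export VERTEX_PROJECT_ID="your-project-id"
export VERTEX_LOCATION="us-central1"
# One of the following two is sufficient:
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
# export GOOGLE_API_KEY="your-google-api-key"
```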

Config.json

```json
{
  "models": {
    "main": {
      "provider": "vertex-anthropic",
      "modelId": "claude-sonnet-4-6"
    },
    "research": {
      "provider": "vertex-anthropic",
      "modelId": "claude-opus-4-6"
    },
    "fallback": {
      "provider": "anthropic",
      "modelId": "claude-3-5-sonnet-20241022"
    }
  }
}
```

CLI Usage

```shell
# Set a Claude model via Vertex AI for main role
task-master models --set-main claude-sonnet-4-6 --vertex-anthropic

# Interactive setup
task-master models --setup
# Then select "* Custom Vertex Anthropic model"
```

Supported Models

Available Claude models on Vertex AI include:

| Model ID | Description |
|---|---|
| `claude-sonnet-4-6` | Claude Sonnet 4.6 |
| `claude-opus-4-6` | Claude Opus 4.6 |
| `claude-haiku-4-6` | Claude Haiku 4.6 |

Note: Model availability depends on your Google Cloud project and region. Refer to Google Cloud documentation for the latest model listings.

MCP Configuration

For Cursor, VS Code, or other MCP-enabled editors:

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "VERTEX_PROJECT_ID": "your-project-id",
        "VERTEX_LOCATION": "us-central1",
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/service-account.json",
        "ANTHROPIC_API_KEY": "your-anthropic-key-for-fallback"
      }
    }
  }
}
```

Then in chat:

Change the main model to claude-sonnet-4-6 with vertex-anthropic provider

Troubleshooting

Model Not Found Error

If you see:

API error: publishers/google/models/claude-sonnet-4-6@default:generateContent not found

Cause: Using the standard vertex provider instead of vertex-anthropic.

Solution: Change provider to vertex-anthropic in your configuration.

Authentication Errors

Vertex AI (Anthropic) requires authentication. Provide one of:
  • GOOGLE_API_KEY environment variable
  • GOOGLE_APPLICATION_CREDENTIALS pointing to a service account JSON file

Solution: Set one of the required authentication environment variables.

Project/Location Errors

If you see:

Google Cloud project ID is required for Vertex AI. Set VERTEX_PROJECT_ID environment variable.

Solution: Set the VERTEX_PROJECT_ID and VERTEX_LOCATION environment variables.

Comparison: vertex vs vertex-anthropic

| Feature | vertex | vertex-anthropic |
|---|---|---|
| Endpoint | `publishers/google/...` | `publishers/anthropic/...` |
| Models | Gemini (Gemini Pro, Flash, etc.) | Claude (Sonnet, Opus, Haiku) |
| SDK Import | `@ai-sdk/google-vertex` | `@ai-sdk/google-vertex/anthropic` |
| Same Auth | Yes | Yes |
| Same Project/Location | Yes | Yes |

References


Note

Medium Risk
Adds a new AI provider and wires it through CLI/config/model-selection flows; misconfiguration or missing env vars could break model setup or runtime provider resolution.

Overview
Adds a new vertex-anthropic custom provider to route Claude model calls through Google Vertex AI’s Anthropic publisher endpoint via a new VertexAnthropicProvider (built on the existing Vertex provider).

Updates the CLI model setup and config/model-setting logic to accept --vertex-anthropic/"* Custom Vertex Anthropic model" with the same Google auth env-var checks, and extends unified AI service execution to apply Vertex project/location configuration for both vertex and vertex-anthropic.

Includes a changeset for a minor release bump and adjusts unit-test mocks to recognize the new provider.

Written by Cursor Bugbot for commit eb5331b. This will update automatically on new commits.

Summary by CodeRabbit

  • New Features
    • Added Vertex Anthropic provider support — configure and use Vertex Anthropic alongside existing Vertex AI options with the same authentication and optional base-URL flow.
  • Documentation
    • Changelog entry added documenting the new provider.
  • Tests
    • Test updates to recognize and mock the new provider in provider-related flows.

…del provider

- Introduce VERTEX_ANTHROPIC custom provider with env var validation
- Add VertexAnthropicProvider to AI providers exports and unified service
- Enable handling of vertex-anthropic in task manager model settings
- Include Vertex Anthropic in provider constants and CLI model configs
- Add unit test mock support for vertex-anthropic provider ID

changeset-bot bot commented Mar 11, 2026

🦋 Changeset detected

Latest commit: eb5331b

The changes in this PR will be included in the next version bump.



coderabbitai bot commented Mar 11, 2026

📝 Walkthrough

Walkthrough

Adds a new Vertex Anthropic provider ("vertex-anthropic"): provider implementation, export, registration, CLI/type/constants updates, task-manager handling, tests, and a changelog entry to route Anthropic (Claude) models via Google Vertex AI.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Provider Implementation & Export**<br>`src/ai-providers/google-vertex-anthropic.js`, `src/ai-providers/index.js` | New VertexAnthropicProvider class that builds a Vertex Anthropic client (auth via API key or service account, project/location, optional baseURL, proxy fetch) and is exported from the providers index. |
| **Provider Constants & Tests**<br>`packages/tm-core/src/common/constants/providers.ts`, `tests/unit/scripts/modules/task-manager/models-baseurl.test.js`, `tests/unit/ai-services-unified.test.js` | Added `VERTEX_ANTHROPIC: 'vertex-anthropic'` to provider constants and test mocks; added a mock VertexAnthropicProvider in tests. |
| **Provider Registry / Unified Services**<br>`scripts/modules/ai-services-unified.js` | Registered 'vertex-anthropic' in the PROVIDERS mapping and extended Vertex-specific configuration logic to include vertex-anthropic alongside vertex. |
| **CLI Config & Types**<br>`apps/cli/src/commands/models/custom-providers.ts`, `apps/cli/src/commands/models/types.ts` | Added VERTEX_ANTHROPIC entry to customProviderConfigs and CUSTOM_PROVIDER_IDS, mirroring Vertex provider checks and optional baseURL prompting. |
| **Task Manager Model Handling**<br>`scripts/modules/task-manager/models.js` | Added hint/branch for VERTEX_ANTHROPIC in model selection and updated the error message listing valid providers. |
| **Changelog**<br>`.changeset/vertex-anthropic-provider.md` | New changeset documenting the addition of the Vertex Anthropic provider. |
| **Minor cleanup**<br>`src/ai-providers/google-vertex.js` | Removed an unused import (resolveEnvVariable). |

Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Provider as VertexAnthropicProvider
    participant VertexAPI as Vertex AI API

    Client->>Provider: instantiate/getClient(params: creds, projectId, location, baseURL?)
    activate Provider
    Provider->>Provider: validate credentials (API key or service account)
    alt credentials valid
        Provider->>Provider: createVertexAnthropic(config, fetch)
        Provider->>VertexAPI: initialize Anthropic endpoint / client
        VertexAPI-->>Provider: client ready
        Provider-->>Client: return client instance
    else credentials invalid
        Provider->>Provider: handleError()
        Provider-->>Client: throw authentication error
    end
    deactivate Provider
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Possibly related PRs

  • PR #1542: Overlaps Vertex authentication/config handling used by vertex-anthropic.
  • PR #1372: Adds new AI provider patterns and registry changes similar to this PR.
  • PR #1387: Introduces proxy/fetch support used by provider implementations (createProxyFetch integration).

Suggested reviewers

  • eyaltoledano
🚥 Pre-merge checks | ✅ 3 passed

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The PR title clearly summarizes the main change: adding support for a new Vertex AI Anthropic provider. It is concise, specific, and directly reflects the core objective of the changeset. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%. |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |



@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
scripts/modules/task-manager/models.js (1)

700-703: ⚠️ Potential issue | 🟡 Minor

Add --vertex-anthropic to the error message for consistency.

The error message listing available provider flags doesn't include the new --vertex-anthropic flag, which could confuse users trying to use the new provider.

🐛 Proposed fix
```diff
-						message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
+						message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --vertex-anthropic, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/modules/task-manager/models.js` around lines 700 - 703, Update the
error message returned when a model ID is not found in Taskmaster's supported
models (the object containing code: 'MODEL_NOT_FOUND_NO_HINT' and its message in
scripts/modules/task-manager/models.js) to include the new provider flag
--vertex-anthropic in the comma-separated list of flags (alongside --openrouter,
--ollama, --bedrock, --azure, --vertex, --lmstudio, --openai-compatible,
--gemini-cli, --codex-cli) so users see the new option; modify only the message
string for the MODEL_NOT_FOUND_NO_HINT response to append --vertex-anthropic in
the same style and punctuation as the other flags.
🧹 Nitpick comments (1)
src/ai-providers/google-vertex-anthropic.js (1)

8-11: Remove unused import resolveEnvVariable.

The resolveEnvVariable function is imported but never used in this file. Environment variable resolution is handled by ai-services-unified.js before calling the provider.

♻️ Proposed fix
```diff
 import { createVertexAnthropic } from '@ai-sdk/google-vertex/anthropic';
-import { resolveEnvVariable } from '../../scripts/modules/utils.js';
 import { log } from '../../scripts/modules/utils.js';
 import { BaseAIProvider } from './base-provider.js';
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ai-providers/google-vertex-anthropic.js` around lines 8 - 11, The import
resolveEnvVariable from '../../scripts/modules/utils.js' is unused in this
module; remove that named import so only used symbols (log) are imported,
leaving the other imports (createVertexAnthropic from
'@ai-sdk/google-vertex/anthropic' and BaseAIProvider) untouched; ensure there
are no remaining references to resolveEnvVariable in the file (e.g., in any
functions or class methods) before committing.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: dfe1f1b9-9a1b-466b-80d3-2a627236bd80

📥 Commits

Reviewing files that changed from the base of the PR and between 2d1211b and fa34d8e.

📒 Files selected for processing (8)
  • apps/cli/src/commands/models/custom-providers.ts
  • apps/cli/src/commands/models/types.ts
  • packages/tm-core/src/common/constants/providers.ts
  • scripts/modules/ai-services-unified.js
  • scripts/modules/task-manager/models.js
  • src/ai-providers/google-vertex-anthropic.js
  • src/ai-providers/index.js
  • tests/unit/scripts/modules/task-manager/models-baseurl.test.js

@Crunchyman-ralph Crunchyman-ralph changed the base branch from main to next March 17, 2026 15:07
- Remove VertexAnthropicProvider duplication by extending VertexAIProvider
  instead of BaseAIProvider (reuses auth, validation, and error handling)
- Remove unused resolveEnvVariable import from both vertex providers
- Add --vertex-anthropic to MODEL_NOT_FOUND_NO_HINT error message

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
src/ai-providers/google-vertex-anthropic.js (2)

13-59: Add direct unit coverage for this adapter.

The only test change in this PR is in the model-selection path, so createVertexAnthropic construction and error handling here are still untested. A small provider-level test would catch regressions in auth and fetch wiring quickly.

Based on learnings: Create unit tests in tests/unit/ai-providers/<provider-name>.test.js that mock the provider's AI SDK module and test each exported function for correct client instantiation, parameter passing, result handling, and error handling.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ai-providers/google-vertex-anthropic.js` around lines 13 - 59, Add direct
unit tests for the VertexAnthropicProvider to cover client construction and
error handling: write tests under
tests/unit/ai-providers/google-vertex-anthropic.test.js that mock the
createVertexAnthropic export and assert that VertexAnthropicProvider.getClient
passes the correct params (projectId, location, apiKey or credentials mapped to
googleAuthOptions, optional baseURL, and fetch from createProxyFetch) and
returns the mocked client; also include a test where the mocked
createVertexAnthropic throws and verify getClient triggers handleError (spy or
stub on VertexAnthropicProvider.handleError) to exercise the catch path. Use the
unique symbols VertexAnthropicProvider, getClient, createVertexAnthropic,
createProxyFetch, and handleError to locate targets in the code.

33-45: Normalize credentials type for safety in both Vertex providers.

isValidCredential() accepts string | object, but getClient() spreads credentials without type checking. While ai-services-unified.js currently normalizes string paths to objects before calling the provider, this defensive gap could cause issues if code evolves. Add type normalization in both google-vertex.js and google-vertex-anthropic.js to ensure consistency between validation and usage.

Suggested normalization
```javascript
const normalizedCredentials =
  typeof credentials === 'string'
    ? { keyFilename: credentials }
    : credentials;
```

Apply this before line 139 in google-vertex.js and before line 39 in google-vertex-anthropic.js, then use normalizedCredentials for spreading and fallback assignment.
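Run standalone, the suggested normalization behaves as below (a sketch using the names from the review suggestion, wrapped in a function for illustration):

```javascript
// Sketch of the review's suggested normalization: a string credential is
// treated as a service-account key file path; objects pass through unchanged.
function normalizeCredentials(credentials) {
	return typeof credentials === 'string'
		? { keyFilename: credentials }
		: credentials;
}

console.log(normalizeCredentials('/path/to/service-account.json'));
console.log(normalizeCredentials({ client_email: 'svc@example.iam.gserviceaccount.com' }));
```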

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ai-providers/google-vertex-anthropic.js` around lines 33 - 45, Normalize
the credentials value before spreading in the provider client setup: in
google-vertex-anthropic.js (and likewise in google-vertex.js) create a
normalizedCredentials variable that converts a string credentials to {
keyFilename: credentials } and leaves objects untouched, then use
normalizedCredentials everywhere you currently spread or assign credentials
(e.g., where authOptions.googleAuthOptions and any getClient() calls use
credentials) so isValidCredential()’s accepted types match actual usage and
avoid runtime type errors.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 4fc3ba26-2bc4-4f03-bf35-39af9fec93ef

📥 Commits

Reviewing files that changed from the base of the PR and between fa34d8e and 607c597.

📒 Files selected for processing (4)
  • .changeset/vertex-anthropic-provider.md
  • scripts/modules/task-manager/models.js
  • src/ai-providers/google-vertex-anthropic.js
  • src/ai-providers/google-vertex.js
💤 Files with no reviewable changes (1)
  • src/ai-providers/google-vertex.js

Comment on lines +586 to +590

```javascript
} else if (providerHint === CUSTOM_PROVIDERS.VERTEX_ANTHROPIC) {
	// Set provider without model validation since Vertex Anthropic models are managed by Google Cloud
	determinedProvider = CUSTOM_PROVIDERS.VERTEX_ANTHROPIC;
	warningMessage = `Warning: Custom Vertex AI Anthropic model '${modelId}' set. Please ensure the model is valid and accessible in your Google Cloud project.`;
	report('warn', warningMessage);
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
rg -n -C3 '\bVERTEX_ANTHROPIC\b|vertex-anthropic' \
  scripts/modules/config-manager.js \
  scripts/modules/supported-models.json \
  scripts/modules/ai-services-unified.js \
  packages/tm-core/src/common/constants/providers.ts

Repository: eyaltoledano/claude-task-master

Length of output: 1990


🏁 Script executed:

#!/bin/bash
# Check config-manager.js for vertex-anthropic provider support
echo "=== Checking config-manager.js ==="
rg -n 'VALID_PROVIDERS|MODEL_MAP|keyMap|vertex-anthropic' scripts/modules/config-manager.js | head -50

# Check if vertex-anthropic is in supported-models.json
echo -e "\n=== Checking supported-models.json ==="
grep -n 'vertex-anthropic' scripts/modules/supported-models.json

Repository: eyaltoledano/claude-task-master

Length of output: 1228


🏁 Script executed:

#!/bin/bash
# View keyMap and surrounding context
echo "=== keyMap definition in config-manager.js ==="
sed -n '895,930p' scripts/modules/config-manager.js

# Check for VALID_PROVIDERS
echo -e "\n=== Checking for VALID_PROVIDERS ==="
rg -n 'VALID_PROVIDERS\s*=' scripts/modules/config-manager.js

# View supported-models.json structure
echo -e "\n=== supported-models.json structure (first 100 lines) ==="
head -100 scripts/modules/supported-models.json

Repository: eyaltoledano/claude-task-master

Length of output: 3425


🏁 Script executed:

#!/bin/bash
# Check full supported-models.json for vertex-anthropic
echo "=== Full search for vertex-anthropic in supported-models.json ==="
grep -c 'vertex-anthropic' scripts/modules/supported-models.json || echo "NOT FOUND"

# List all provider keys in supported-models.json
echo -e "\n=== All provider keys in supported-models.json ==="
grep -o '^\s*"[^"]*":' scripts/modules/supported-models.json | head -20

Repository: eyaltoledano/claude-task-master

Length of output: 528


🏁 Script executed:

#!/bin/bash
# Extract top-level provider keys from supported-models.json using jq
echo "=== Providers registered in supported-models.json ==="
jq 'keys' scripts/modules/supported-models.json

Repository: eyaltoledano/claude-task-master

Length of output: 353


Add vertex-anthropic to config-manager.js and supported-models.json registries.

Lines 586–590 allow users to set VERTEX_ANTHROPIC as the provider, but the integration is incomplete. The provider is missing from:

  1. keyMap in config-manager.js (lines 895–912) — isApiKeySet() and getMcpApiKeyStatus() will fail for this provider.
  2. supported-models.json — getAvailableModels() will not discover vertex-anthropic models, and MODEL_MAP validation will not recognize the provider.

Users can persist the provider here but will encounter broken key-status checks and missing model metadata downstream. Add the provider to both registries with the appropriate API key mapping and model metadata.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/modules/task-manager/models.js` around lines 586 - 590, Add the
missing Vertex Anthropic provider to the registries: update the keyMap object in
config-manager.js to include an entry for CUSTOM_PROVIDERS.VERTEX_ANTHROPIC
mapping to the correct API key config so isApiKeySet() and getMcpApiKeyStatus()
can detect its key, and update supported-models.json to register
"vertex-anthropic" in MODEL_MAP and the list used by getAvailableModels() with
appropriate model metadata (display name, family, and any required fields) so
MODEL_MAP validation and model discovery recognize the provider.

```diff
 error: {
 	code: 'MODEL_NOT_FOUND_NO_HINT',
-	message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
+	message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --vertex-anthropic, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
```

⚠️ Potential issue | 🟡 Minor

Include --claude-code in the recovery hint.

setModel() still supports CUSTOM_PROVIDERS.CLAUDE_CODE, so this message is missing one valid way to disambiguate a custom model.

💡 Proposed fix
-						message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --vertex-anthropic, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
+						message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --vertex-anthropic, --claude-code, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/modules/task-manager/models.js` at line 701, The recovery hint for
unknown model IDs is missing the `--claude-code` option even though `setModel()`
supports CUSTOM_PROVIDERS.CLAUDE_CODE; update the message string (the template
that references modelId) to include `--claude-code` among the listed flags so
users can disambiguate custom Claude Code models.


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
tests/unit/ai-services-unified.test.js (1)

234-240: Couple the VertexAnthropicProvider mock contract to VertexAIProvider behavior.

Because production VertexAnthropicProvider inherits from VertexAIProvider, duplicating API-key methods inline here can drift from base-provider behavior and hide provider-resolution regressions in unified service tests.

Refactor suggestion
```diff
+const createMockVertexProvider = () => ({
+	generateText: jest.fn(),
+	streamText: jest.fn(),
+	generateObject: jest.fn(),
+	getRequiredApiKeyName: jest.fn(() => null),
+	isRequiredApiKey: jest.fn(() => false)
+});
+
+const mockVertexAIProvider = createMockVertexProvider();
+const mockVertexAnthropicProvider = createMockVertexProvider();

 ...
-	VertexAIProvider: jest.fn(() => ({
-		generateText: jest.fn(),
-		streamText: jest.fn(),
-		generateObject: jest.fn(),
-		getRequiredApiKeyName: jest.fn(() => null),
-		isRequiredApiKey: jest.fn(() => false)
-	})),
+	VertexAIProvider: jest.fn(() => mockVertexAIProvider),
-	VertexAnthropicProvider: jest.fn(() => ({
-		generateText: jest.fn(),
-		streamText: jest.fn(),
-		generateObject: jest.fn(),
-		getRequiredApiKeyName: jest.fn(() => null),
-		isRequiredApiKey: jest.fn(() => false)
-	})),
+	VertexAnthropicProvider: jest.fn(() => mockVertexAnthropicProvider),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/ai-services-unified.test.js` around lines 234 - 240, The
VertexAnthropicProvider test mock duplicates API-key methods and should delegate
those to the VertexAIProvider behavior instead of hardcoding them; update the
VertexAnthropicProvider jest.fn mock to reuse or extend the VertexAIProvider
mock/implementation for getRequiredApiKeyName and isRequiredApiKey (e.g., obtain
the VertexAIProvider mock instance or prototype and call/assign its
getRequiredApiKeyName and isRequiredApiKey into the VertexAnthropicProvider
mock) so the unified tests track base-provider behavior and avoid drift.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2855da73-3385-4a59-bd99-2ef0391c0339

📥 Commits

Reviewing files that changed from the base of the PR and between 607c597 and eb5331b.

📒 Files selected for processing (1)
  • tests/unit/ai-services-unified.test.js


@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


```diff
 error: {
 	code: 'MODEL_NOT_FOUND_NO_HINT',
-	message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
+	message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --vertex-anthropic, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
```

CLI flag --vertex-anthropic never defined, breaking advertised workflow

High Severity

The error message now tells users to specify --vertex-anthropic as a CLI flag, but this option is never defined in the Commander.js command setup in commands.js. There is no .option('--vertex-anthropic', ...) call, no entry in the providerFlags validation array, and no mapping in the providerHint ternary chain. Running task-master models --set-main claude-sonnet-4-6 --vertex-anthropic will fail because Commander doesn't recognize the flag. The interactive setup path works via custom-providers.ts, but the direct set workflow advertised in the PR description and error message is non-functional.

